Notes - Neuro II, Iles, reaching + movement

Greg Detre

Thursday, 18 October, 2001

Dr J Iles, St Hugh's, week 2

 

Essay title

Discuss critically the role of the parietal cortex in spatial localisation, eye movements and reaching.

Possible alternative titles from exam papers

What are the relations between the supplementary, premotor, and motor cortical areas?

Discuss critically the role of the parietal cortex in spatial localization, eye movements and reaching.

How well do we understand the brain pathways and mechanisms that control exploratory eye-movements?

How does the central nervous system acquire and integrate sensory information about the positions of our limbs?

Compare and contrast the functional roles of motor cortical areas in the mammalian brain.

Compare how the brain controls rapid movements of the eyes and of the limbs?

Discuss the neural control of eye movements that mimic retinal image slip during head and body motion.

Notes - Kandel & Schwartz

parietal cortex - 675, 372, 831, 277, 825

Voluntary movement - ch 38

The primary motor cortex is no longer seen as a simple somatotopic motor representation, but rather a multitude of representations that overlap, allowing the cortex to organise combinations of movements for specific tasks. Movement-related neurons in the premotor areas may fire during movements related to specific tasks and not others to encode a more global feature, such as set-related neurons which are active in the absence of any overt behaviour, e.g. during a delay between task instructions and execution. The presupplementary motor area is active during the learning of a behaviour, but becomes less active as learning progresses, with activity in the supplementary area eventually ceasing when the behaviour becomes automatic. Thus, the hierarchy of motor control gives rise to a hierarchy of task features. Parts of the parietal cortex, together with the motor areas, are heavily involved in the planning and execution of voluntary movements, producing motor programs from the coordinate frames in which the external environment is represented.

The ocular motor system

The two major functions of the oculomotor system are to bring targets onto the fovea and to keep them there.

There are five different types of eye movements (all sharing the same effector pathway - the three bilateral groups of ocular motor neurons in the brain stem).

two stabilise the eye during head movement

vestibulo-ocular

uses vestibular input (semi-circular canals) to hold images stable on the retina during brief or rapid head rotation, i.e. it changes the head velocity signal into an eye velocity signal

opto-kinetic

uses visual input to hold images stable on the retina during sustained or slow head rotation

three keep the fovea on a visual target

saccade

brings new objects of interest onto the fovea

smooth pursuit

holds the image of a moving target on the fovea

vergence

adjusts the eyes for different viewing distances in depth

We detect objects over a visual angle of about 200°, although the fovea extends over only about 1° (1 mm diameter).

Vestibular nystagmus is when the eyes flick backwards (quick phase) during sustained rotation. In the dark, the semicircular canals adapt after a few seconds (although brain stem circuitry extends this habituation time), so during sustained or slow head movement the vestibular signal ultimately fails and the eyes begin to move in space. However, in the light the opto-kinetic system compensates for the defects in the vestibular system by using the visual motion of head movement to drive the eyes. The optokinetic reflex has a long latency and slow buildup, interpreting visual (especially full-field) motion as head movement, e.g. the sensation of backward motion when stationary and the car next to you moves forward. The gains of the vestibulo-ocular and opto-kinetic reflexes adapt, so we can adapt over a few days to reversing prisms and glasses, which change how head movement affects retinal movement. This requires the cerebellar flocculus - Miles et al found that floccular Purkinje cells in the monkey respond to the visual signal that arises from the mismatch of head velocity and eye velocity.
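
A toy sketch (mine, not Miles et al's actual model) of how a VOR gain could adapt from retinal slip, in the spirit of the floccular learning described above - the gain, learning rate and update rule are all illustrative assumptions:

```python
def simulate_vor_adaptation(optical_gain=-1.0, n_trials=200, lr=0.05):
    """Toy error-driven adaptation of the VOR gain.

    optical_gain maps head velocity to image motion relative to the
    head: -1.0 for normal viewing, +1.0 for reversing prisms.
    """
    g = 1.0             # VOR gain: eye velocity = -g * head velocity
    head_vel = 10.0     # deg/s, constant test rotation
    for _ in range(n_trials):
        eye_vel = -g * head_vel
        # retinal slip = image motion not cancelled by the eye movement
        # (the head/eye velocity mismatch signal hypothesised to be
        # carried by floccular Purkinje cells)
        slip = optical_gain * head_vel - eye_vel
        g -= lr * slip / head_vel   # reduce slip on the next rotation
    return g

print(simulate_vor_adaptation(-1.0))  # ~1.0: normal compensatory gain
print(simulate_vor_adaptation(+1.0))  # ~-1.0: reversed gain after prisms
```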

The smooth pursuit system voluntarily tracks a moving object. Its maximum velocity is about 100°/s, and it requires a moving stimulus to calculate the proper eye velocity. First, we make a rapid movement to catch up with the moving object, and then we follow it at the same speed. The rapid movement, like the quick phase of nystagmus, is a saccade. When a stationary object starts moving, the eyes maintain their position for about 200 ms before bringing the target back onto the fovea, in a highly stereotyped motion - a standard waveform that reflects a single smooth increase and decrease of eye velocity. Saccades are too fast (up to 900°/s) to modify their course by visual feedback, so corrections are made in small saccades after the primary one. Like the vestibulo-ocular reflex, the saccadic system can adapt to changes in muscle function, e.g. weakness in one of the extraocular muscles (producing hypometric saccades) - if the strong eye is patched, the system learns to compensate for the insufficient signal, and when the strong eye is unpatched its saccades overshoot significantly.

The vergence movement is disconjugate (the eyes move in opposite directions when they converge/diverge to focus on objects at different distances from the viewer). Retinal disparity = the difference in retinal position of an object in one eye compared to its position in the other. Disparities used as cues for stereopsis can be a few tens of seconds of arc, whereas the retinal disparities that evoke vergence movements require a few minutes of arc. Accommodation = contraction of the ciliary muscles to change the radius of curvature of the crystalline lens in the eye, reducing the out-of-focus blur when targets approach the eyes - this is related to vergence.

The eye is moved by three complementary pairs of muscles: four rectus muscles (superior, inferior, medial and lateral) and two oblique muscles (superior and inferior). The extraocular muscles are innervated by three groups of motor neurons whose cell bodies form nuclei in the brain stem (the abducens nucleus in the pons, the oculomotor nucleus in the midbrain at the level of the superior colliculus and the trochlear nucleus in the midbrain at the level of the inferior colliculus). The discharge frequency of each extraocular motor neuron is directly proportional to the position of the eye and to its velocity.
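
This position-plus-velocity relation is usually written as a first-order model (often attributed to Robinson); here is a minimal sketch with made-up constants, not fitted values:

```python
def motor_neuron_rate(eye_pos_deg, eye_vel_deg_s,
                      baseline=100.0, k=4.0, r=0.9):
    """Firing rate of an extraocular motor neuron as a linear function
    of eye position (k, spikes/s per deg) and eye velocity
    (r, spikes/s per deg/s). All constants are illustrative."""
    return max(0.0, baseline + k * eye_pos_deg + r * eye_vel_deg_s)

print(motor_neuron_rate(10.0, 0.0))    # holding the eye 10 deg eccentric
print(motor_neuron_rate(10.0, 400.0))  # position signal + saccadic burst
```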

Notes - Ben Townsend's essay

The parietal cortex of monkeys is situated caudal to the postcentral gyrus and superior to the Sylvian fissure. It can be divided into two major areas: an anterior part, consisting of the postcentral gyrus and the caudal bank of the central sulcus; and a posterior part, which can be divided into an upper (superior) and lower (inferior) lobule by the intraparietal sulcus. The anterior part contains Brodmann's areas 1, 2, 3 and 43 and contains primary somatosensory cortex. The posterior part consists of areas 5a, 5b, 7a, 7b, and the anterior, lateral, medial and ventral intraparietal areas (AIP, LIP, MIP and VIP respectively). In addition, the area between the intraparietal sulcus and the prelunate gyrus contains the middle temporal (MT) and medial superior temporal (MST) areas.

Area LIP receives inputs from extrastriate visual areas and sends outputs to prefrontal cortex, the caudate nucleus, and the superior colliculus - areas of the brain concerned with saccadic eye movements (Lynch et al 1985, Asanuma et al 1985, Blatt et al 1990). Area 7a has strong cortical connections with the visual areas, and with the parahippocampal gyrus and the cingulate cortex, the latter two being areas that are concerned with the highest cognitive functions. Areas 7b and VIP are connected more to the somatosensory system (Andersen et al, 1990a), but respond to visual stimuli as well. 7b projects to the supplementary motor area (SMA) while VIP outputs to the caudate and the cerebellum. Broadly speaking, areas 7a, 7b, LIP, VIP and MST appear to process information about spatial relationships.

Notes - Sakata (1997), The parietal association cortex in depth perception and visual control of hand action

 

Notes - Iwaniuk & Whishaw (2000), On the origins of skilled forelimb movements

 

Notes - Goodale (1998), Frames of Reference for Perception and Action in the Human Visual System

 

Notes - Andersen (1997), Multimodal representation of space in the posterior parietal cortex and its use in planning movements

According to Andersen, the parietal lobe contains an abstract multi-modal distributed representation of space, combining vision, somatosensation, audition, and vestibular sensation, which 'can then be used to construct multiple frames of reference to be used by motor structures to code appropriate movements', as well as selecting stimuli and helping to plan movements. Efference copies of motor commands, probably generated in the frontal lobes, also converge on the posterior parietal cortex and provide information about body movements (all coded in different coordinate frames).

Posterior parietal

Andersen singles out various areas within the posterior parietal cortex as being extensively investigated and of particular interest: area 7a, the lateral intraparietal area (LIP), the medial superior temporal area (MST), area 7b, and the ventral intraparietal area (VIP).

Andersen et al (1992) speculate that LIP is the 'parietal eye field', 'specialised for visual-motor transformation functions related to saccades', on the basis of the strong direct projections from extrastriate visual areas and projections to various cortical and subcortical areas concerned with saccadic eye movements, and results from electrical stimulation.

MST (subdivided into MSTd (dorsal) and MSTl (lateral)) seems highly involved in motion processing. MSTd may be involved in navigation using motion cues, since many of the motion patterns that MSTd neurons are selective for (e.g. expansion, contraction, rotation and spiraling) occur during self-motion; MSTd neurons also have large receptive fields and receive signals related to smooth pursuit eye movements and vestibularly derived head pursuit signals.

Area 7a has large bilateral receptive fields, with strong cortical connections to other visual areas, as well as 'areas of the cortex associated with the highest cognitive functions, including the parahippocampal gyrus and cingulate cortex'.

Areas 7b and VIP are closely tied in with the somatosensory system and, to a lesser extent, vision.

All of the areas are strongly interconnected via corticocortical projections. Thus even seemingly unimodal visual areas like LIP and MST 'can reveal their multimodal nature when probed with the right set of tasks'.

Multimodal representation of space

Eye position and visual signals

Andersen claims that areas 7a and LIP use their eye position and retinal input signals to represent the location of a visual target with respect to the head, a 'head-centred reference frame'. He concedes that 'intuitively one would imagine that an area representing space in a head-centred reference frame would have receptive fields that are anchored in space with respect to the head', but proposes instead that a highly distributed pattern is used to uniquely specify each head-centred location in the activity across a population of cells with different eye position and retinal position sensitivities. Indeed, he argues that 'when neural networks are trained to transform retinal signals into head-centred coordinates by using eye position signals, the middle-layer units that make the transformation [develop] gain fields similar to the cells in the parietal cortex (Zipser & Andersen, 1988)'.
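
A minimal numerical sketch of the gain-field idea (my own toy version, not Zipser & Andersen's network; all tuning parameters invented): a Gaussian retinal receptive field is scaled multiplicatively by a planar function of eye position, so no single cell is anchored in head-centred space, but the population activity uniquely specifies head-centred location (retinal position + eye position):

```python
import numpy as np

def gain_field_response(retinal_pos, eye_pos,
                        rf_centre=0.0, rf_width=10.0,
                        gain_slope=0.02, gain_offset=1.0):
    """Gaussian retinal receptive field multiplied by a planar
    eye-position gain - the response form reported for posterior
    parietal neurons. Parameters are illustrative."""
    rf = np.exp(-(retinal_pos - rf_centre) ** 2 / (2 * rf_width ** 2))
    gain = max(0.0, gain_offset + gain_slope * eye_pos)
    return gain * rf

# The same head-centred location (retinal + eye = 20 deg) yields a
# different response for each retinal/eye combination, so head-centred
# position is carried implicitly across a population of such cells.
for eye in (-10.0, 0.0, 10.0):
    retinal = 20.0 - eye
    print(eye, retinal, round(gain_field_response(retinal, eye), 4))
```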

Head position

A body-centred reference frame combines information about retinal position, eye position and head orientation. About half of all 7a and LIP cells that have eye gain fields also have head gain fields. He argues that these cells are generalising for gaze direction relative to the body(???).

There are at least three sources for information about head position: the motor signals to move the head (an 'efference copy'), a vestibular signal and neck proprioceptive signals.

Snyder et al (1993) showed that supplying solely vestibular signals (in the dark to isolate from visual input), or solely proprioceptive cues (rotating the trunk while keeping the head fixed), both elicit responses from these cells in 7a and LIP, indicating that both of these sources are used in constructing the body-centred frame. Furthermore, input from the vestibular and visual systems (e.g. landmarks and optic flow) can contribute to a world-centred representation.

Auditory signals

Binding visual and auditory signals is difficult, yet seems to come so naturally. It requires a representation abstract enough to combine 2-D retinal inputs (eye-centred) with computations of interaural time, interaural intensity and spectral cues from both ears (head-centred).

This problem is related to the ease with which we are able to saccade to auditory and tactile stimuli and verbal commands, as well as visual stimuli. It requires a site where these modalities can be integrated and processed before the oculomotor area can send signals to the extraocular muscles around the eyes.

Mazzoni et al (1996) recently demonstrated that when a monkey is required to memorize the location of an auditory target in the dark and then to make a saccade to it after a delay, there is activity in LIP during the presentation of the auditory target and during the delay period. This auditory response generally had the same directional preference as the visual response, suggesting that the auditory and visual receptive fields and memory fields may overlap one another.

The above experiments were done when the animal was fixating straight ahead, with its head also oriented in the same direction. Under these conditions, the eye and head coordinate frames overlap. However, if the animal changes the orbital position of its eyes, then the two coordinate frames move apart. Do the auditory and visual receptive fields in LIP move apart when the eyes move, or do they share a common spatial coordinate frame?

Stricanne et al (1996) showed that almost half of the auditory-responding cells in LIP coded the auditory location in eye-centred coordinates, like in the superior colliculus, where auditory fields are also in eye-centred coordinates (Jay & Sparks, 1984).

A third were coded in head-centred coordinates, and a quarter were intermediate between the two coordinate frames. Cells of all three types also had gain fields for the eye. He argues that 'at least this subpopulation shares a common, distributed representation with the visual signals in LIP'(???).

Visual motion and pursuit

MST may be solving an important spatial problem, that of 'computing the direction of self-motion in the world based on the changing retinal image'.

Visual navigation = finding one's heading based on visual information, e.g. using the centre of expanding visual motion generated by self-motion as the direction of heading (Gibson, 1950). Interestingly, we can recover the direction of heading even when we are fixating/tracking an object that is not directly ahead of us. The resulting optic flow field needs to be decomposed into a) the movement of the observer (expanding field) and b) eye rotation (linearly moving field). It seems (Royden et al, 1992) that an efference copy of the pursuit command may be very helpful in recovering the direction of heading.
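
A toy 1-D sketch of that decomposition (my own illustration; all numbers arbitrary): self-motion produces a field expanding about the heading, pursuit adds a uniform field that shifts the retinal focus of expansion, and subtracting an efference-copy estimate of the pursuit component recovers the true heading:

```python
import numpy as np

x = np.linspace(-40, 40, 9)         # retinal positions (deg)
heading = 0.0                        # true direction of heading
expansion_rate = 0.5                 # toy expansion gain

translation_flow = expansion_rate * (x - heading)  # expands about heading
pursuit_vel = 5.0                    # pursuit adds a uniform (lamellar) flow
retinal_flow = translation_flow + pursuit_vel

# The retinal focus of expansion is shifted by the pursuit component...
retinal_focus = x[np.argmin(np.abs(retinal_flow))]
# ...but subtracting the efference copy of pursuit recovers the heading.
corrected = retinal_flow - pursuit_vel
recovered_focus = x[np.argmin(np.abs(corrected))]
print(retinal_focus, recovered_focus)  # shifted focus vs. ~0 (true heading)
```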

Handily, MSTd contains cells selective for one or more of the following: expansion-contraction, rotation and linear motion (Saito, 1986). However, it appears that MSTd is not decomposing the optic flow into channels of expansion, rotation and linear motion - Andersen produced a spiral space with expansion on one axis and rotation on another, and found that disappointingly few of the MSTd neurons had tuning curves aligned directly along these axes. Interestingly though, the MSTd neurons displayed a high degree of position and size invariance, as well as form/cue invariance. The MSTd cells seem to convey 'the abstract quality of a pattern of motion, e.g. rotation', which may be important in analysing optic flow by gathering information from any part of the visual field. MST may use cells sensitive to motion pattern in combination with the pursuit eye movement signal it receives to code direction of heading. They found that many MSTd neurons shift their receptive fields during pursuit eye movements to code the direction of heading more faithfully than the focus of expansion on the retina. For instance, when viewing an expanding pattern while making a pursuit movement towards, say, the left, the retinal position of the focus shifts left, which many expansion-selective MSTd neurons compensate for by shifting their receptive fields to the left (and often vice versa for rightward movements).

When the eyes move, the focus tuning curve of these cells shifts in order to compensate for the retinal focus shift due to the eye movement. In this way MSTd could map out the relationship between the expansion focus and heading with relatively few neurons, each adjusting its focus preference according to the velocity of the eye.

Similar models by Perrone & Stone (1994) and Warren (1995) require more neurons, since they need separate heading maps for different combinations of eye direction and speed (rather than just eye movement)(???).

This pursuit compensation is achieved by a non-uniform gain and distortion applied to different locations in the receptive field. He details two methods by which this might be accomplished.

Experiments have yet to determine in which coordinate frame the direction of heading is coded. If it's eye-centred, then eye and head gain fields could map this direction of heading signal to other coordinate frames for appropriate motor behaviours such as walking or driving.

MSTd may also be more generally used in providing perceptual stability during tracking movements. Rotation cells' focus of rotation is also displaced during pursuit eye movements, but orthogonal to the eye movement direction (rather than in the same direction, as with expansion) - and indeed the focus tuning (of the receptive fields???) of rotation cells in MSTd shifted orthogonal to the direction of pursuit.

MSTd may compensate spatially for the consequences of eye movements for all patterns of motion.

Models of coordinate transformation

Neural network models can illustrate methods employing gain fields to transform between coordinate frames.

Transformations between coordinate frames

Mentions Zipser & Andersen (1988) again, which showed that when 'retinal position signals are converted to a map of the visual field in head-centred coordinates, the hidden units that perform this transformation develop gain fields very similar to those demonstrated in the posterior parietal cortex', and that the activities found for posterior parietal neurons could be the basis of a distributed representation of head-centred space.

Converting auditory and visual signals to oculomotor coordinates

In Xing et al's (1995) model, which takes in head-centred auditory signals and eye position and retinal position signals as input, and whose output codes the metrics of a planned movement in motor coordinates, the middle layers develop overlapping receptive fields for auditory and visual stimuli and eye position gain fields. It is interesting that the visual signals also develop gain fields, since both the retinally based stimuli and the motor error signals are always aligned when training the network and, in principle, do not need to use eye position information. However, the auditory and visual signals share the same circuitry and distributed representation, which results in gain fields for the visual signals.
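
The core transformation this network learns can be written down directly (a hypothetical minimal version with my own names and numbers, not Xing et al's architecture): an auditory target arrives head-centred and must have current eye position subtracted to become an oculomotor vector, whereas a visual target arrives retinally, i.e. already as a motor error:

```python
import numpy as np

def motor_error(target, eye_pos, modality):
    """Oculomotor vector (motor error) for a saccade to a target.

    Auditory targets are head-centred, so eye position is subtracted;
    visual targets are eye-centred (retinal), so they already equal
    the required movement vector."""
    target = np.asarray(target, dtype=float)
    if modality == "auditory":
        return target - np.asarray(eye_pos, dtype=float)
    return target

eye = [10.0, 0.0]   # current eye position (deg, head-centred)
print(motor_error([25.0, 5.0], eye, "auditory"))  # -> [15. 5.]
print(motor_error([15.0, 5.0], eye, "visual"))    # -> [15. 5.]
```

Because both modalities converge on the same hidden units in the model, the visual pathway inherits eye-position gain fields even though, as noted above, it does not strictly need eye position information.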

Multiple coordinate frames in parietal cortex

By using the gain field mechanism, a variety of modalities in different coordinate frames can be integrated into a distributed representation of space.

In this way, information is not collapsed and lost - for instance, if the gain field mechanism is used to produce a head-centred frame from retinal position and eye position, the eye-centred coordinates could still be read out by another structure - the two components have not been conflated - it's almost like shifting all the information in a spreadsheet one column along.

Lesions to the posterior parietal cortex give rise to spatial deficits in multiple coordinate frames. This could be because many coordinate frames might conceivably be representable in the same population of neurons. Or it could simply be that the different coordinate frames exist in close proximity to one another and so would all be affected at the same time.

Converting retinotopic signals to oculocentric coordinates

No coordinate transformation is necessary for a simple visual saccade. However, there are occasionally times when the oculomotor coordinates are in a different frame from sensory-retinal coordinates (e.g. displacement of the eye from electrical stimulation or an intervening saccade), yet the cells in the PPC, frontal eye fields and superior colliculus are still able to code the impending movement vector, even though no visual stimulus has appeared in their receptive fields. Krommenhoek et al's (1993) and Xing et al's (1995) networks were able to replicate this result, both developing eye gain fields in the hidden layer. The Xing et al neural network was trained on a double-saccade task; it took two retinal locations as input and output the motor vectors of two eye movements, first to one target and then to the other. In order to program the second saccade accurately, the network was required to use the remembered retinal location of the second target and update it with the new eye position. This implies that an implicit distributed representation of head-centred location was formed in the hidden layer.
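
That updating step reduces to simple vector arithmetic (a sketch with made-up coordinates): the second motor vector is the remembered retinal location of the second target minus the eye displacement produced by the first saccade:

```python
import numpy as np

# Retinal locations of both targets, recorded before any movement (deg).
t1 = np.array([10.0, 0.0])   # first target, eye-centred
t2 = np.array([10.0, 15.0])  # second target, eye-centred

m1 = t1        # first saccade vector: simply the retinal error
m2 = t2 - m1   # second saccade: remembered retinal location of target 2,
               # updated by the eye displacement m1
print(m1, m2)  # [10. 0.] [0. 15.]
```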

Algorithms for gain fields

Multiplicative, additive and ceiling effects (which you can see in NNs in terms of where on the sigmoidal activation function the summed inputs lie) have all been observed in the recording data.
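
A toy illustration of that observation (my own, with arbitrary weights): a single sigmoidal unit summing retinal and eye-position drives behaves roughly multiplicatively in its lower tail, additively in its linear middle, and shows a ceiling effect near saturation, depending on where the summed input lies:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unit(retinal_drive, eye_drive, bias):
    # one hidden unit summing its retinal and eye-position inputs
    return sigmoid(retinal_drive + eye_drive + bias)

for bias, regime in [(-6.0, "lower tail (approx. multiplicative)"),
                     (0.0, "linear middle (approx. additive)"),
                     (6.0, "saturated (ceiling effect)")]:
    responses = [round(unit(r, 0.5, bias), 4) for r in (-1.0, 0.0, 1.0)]
    print(regime, responses)
```

In the lower tail the sigmoid is roughly exponential, so adding the eye drive scales the response multiplicatively; in the middle it is roughly linear, so the two drives add; near saturation extra drive has little effect (the ceiling).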

Cognitive intermediates in the sensory-motor transformation process

The PPC also contains circuitry that appears to be important for shifting attention, stimulus selection and movement planning.

Attention

Patients with lesions to the PPC have difficulty shifting their focus of attention (Posner et al, 1984). It now seems that visual responsiveness of parietal neurons is actually reduced at the focus of attention (Robinson et al, 1995), while locations away from the focus of attention are more responsive, apparently signaling novel events for the shifting of attention.

Intention

Gnadt & Andersen (1988) have shown that activity in cells primarily in LIP (coding in oculomotor coordinates) precedes saccades. This activity is also memory-related, e.g. lighting up while a monkey remembers the location of a briefly-flashed stimulus and then, after a delay, makes a saccade to the remembered location. Glimcher & Platt required an animal to attend to a distractor target, which was extinguished as a cue to saccade to the selected target, thus separating the focus of attention from the selected movement. For many of the cells, the activity reflected the movement plan and not the attended location, although the activity of some cells was influenced by the attended location. Andersen thinks that these and other studies suggest that a component of LIP activity is related to movements that the animal intends to make.

Mazzoni et al (1996) used a delayed double-saccade experiment to try and distinguish whether the memory activity was primarily related to intentions to make eye movements or to a sensory memory of the location of the target. They found both types of cells, with the majority of overall activity being related to the next intended saccade and not to the remembered stimulus location. This did not necessarily lead to execution of the movement, since the animals could be asked to change their planned eye movements during the delay period in a memory saccade task, and the intended movement activity in LIP would change correspondingly (Bracewell et al, 1996).

If it could be shown that the activity is related to the type of movement being planned, it would be a strong indication that the activity is intention-related.

Bushnell et al (1981) recorded from PPC neurons while the animal programmed an eye or reaching movement to a retinotopically identical stimulus. They claimed that the activity of the cells did not differentiate between these two types of movements, indicating that the PPC is concerned with sensory location and attention and not with planning movements.

However, when Andersen et al repeated the experiment, they found that two-thirds of cells in the PPC were selective during the memory period for whether the target required an arm or an eye movement.

Intention activity occurs when a monkey considers a movement

By adding a control experiment involving both an arm and an eye movement (sometimes in opposite directions, so that sometimes one fell within and one outside the receptive field), they were able to conclude that plans for both movements were represented by subpopulations of cells in the PPC, even if only one movement would eventually be made(??? pg 24).

Andersen considers Duhamel et al (1992) (similar to Gnadt & Andersen, 1988) and Kalaska & Crammond (1995) to be studies whose results could be explained by their theory that the memory-related activity in the PPC signals the animal's plan to make a movement.

Thus, when stimulus-related activity comes into the parietal cortex, it can sometimes invoke more than one potential plan, e.g. both eye and limb movements, even if the limb movement is not executed.

Summary and conclusions

This coding of signals in the coordinates of movement is consistent with the recent proposal of Goodale & Milner (1992) that posterior parietal cortex is an action system specifying how actions can be accomplished.

Notes - Husain & Jackson (2001), Visual space is not what it appears to be

Probe stimuli that appear very briefly at a wide range of stimulus locations immediately prior to the execution of a saccadic eye movement are not perceived to be in their veridical positions, but are instead reported to be at locations compressed towards the target of the saccadic eye movement - the intended new point of fixation or direction of gaze. If no saccadic eye movement is planned, then the location is misreported closer to the point of fixation (Ross & Morrone, 1997).

The same observations were made when subjects reported the location of the probe verbally (by calling out a number corresponding to a visual or memorised scale), just before making a saccade. When post-saccadic information was not available, subjects' localisation of the probe stimulus by pointing was extremely accurate, as were their verbal reports. However, when the subjects' vision was not occluded, both pointing and verbal reports demonstrated the same compression towards the point of fixation or saccade. Burr et al think this supports the idea of two separate visual systems, one for conscious perception (more plastic and subject to spatial distortion) and the other for the control of action.

Husain & Jackson suggest that 'the findings of Burr et al are consistent with recent demonstrations of separate neural systems for representing movements in eye-centred and body-centred coordinates. For example, when some patients with hemispatial visual neglect following posterior cortical lesions reach towards visual targets, their trajectories may be spatially distorted, but this distortion does not occur when they reach to proprioceptively-defined targets [8].'

Movements may be planned and controlled within body-part-centred coordinate systems, e.g. eye-centred representations may be responsible for dynamic remapping across saccades, whereas body-centred representations are not.

They suggest that these 'compression' effects reside in the PPC. In the monkey, electrophysiological studies have identified multiple representations of space within this area [10-13], each associated with different types or combinations of action. They suggest [14] that the remapping going on in LIP while monkeys make saccadic eye movements might be from a coordinate system with the initial fixation point as its origin, to one with the upcoming fixation point as its origin.

They argue that this eye-centred remapping may be the (unknown) mechanism behind the spatial 'compressive' effect. 'Some LIP neurons continue to encode both the original point of fixation as well as the new intended point of fixation at around the time of the saccade, effectively encoding space in a coarser representation'. Also, parts of the LIP appear to be involved in maintaining a memory trace for the location of saccadic targets across delays.

When the intraparietal sulcus is lesioned in humans, a profound spatial deficit results, which Husain & Jackson explain as an impairment in spatial re-mapping across saccades: when tested on a double-saccade task [16,17], patients are unable to '[make] saccades commensurate with the original retinal position of the second target', i.e. they fail to take into account the new eye position after the first saccade. They suggest that this may account for one component of the hemispatial neglect syndrome which follows parietal damage.

They then consider the compression seen in visually-guided pointing movements in healthy humans.

They say that 'recent electrophysiological studies of the superior parietal lobe have demonstrated the existence of a 'parietal reach region', where 'representations associated with reaching movements appear to be eye-centred rather than body-centred'. Apparently, visual signals are combined with hand position and movement signals, as well as eye position and movement signals, in the parietal reach region.

In humans, lesions of the SPL lead to misreaching to peripheral visual targets - optic ataxia. Patients are able to reach towards proprioceptively-defined targets, but not visual stimuli (especially targets in their peripheral vision). When reaching to peripheral visual targets while fixating centrally, they err towards the direction of gaze - 'magnetic misreaching'. The authors speculate that this extreme compression 'may result from an imbalance between visual representations arising from the fovea and those originating from the peripheral retina'.

They argue that the veridical pointing without visual input is explicable because such movements towards remembered locations would not be coded in eye-centred but in body-centred (proprioceptive) coordinates (found in areas 5 and 7b). Consistent with this, lesions of these regions lead to misreaching in the dark, but not the light.

Structure

Discuss critically the role of the parietal cortex in spatial localization, eye movements and reaching.

describe the parietal cortex anatomically

perhaps explain briefly how the motor system performs reaching and eye movements

possibly consider the origin of forelimb movements in phylogenetic/evolutionary terms

consider Goodale's visuomotor vs visuoperceptual streams (much more than just what/where)

briefly describe the dorsal visual stream

consider lesion + agnosia evidence (Farah)

using Andersen, argue that the parietal cortex fits the bill as part of the visuomotor stream

bring in Sakata re parietal cortex in depth perception etc.

discuss criticisms (especially re Andersen)

conclude

Questions

Kandel & Schwartz - the ocular motor system

what's the difference between opto-kinetic and vestibulo-ocular??? what's the difference between opto-kinetic and nystagmus???

does the opto-kinetic system work for body movement too, i.e., it's just about retinal movement however caused, right???

nystagmus - what's it used for???

presumably we can track a moving target with smooth pursuit even while our head's moving, can't we??? does that involve 2 systems at once, additively???

how do vergence, blur and accommodation fit together???

how is the discharge frequency of each extraocular motor neuron directly proportional to the position of the eye???

Sakata (1997), The parietal association cortex in depth perception and visual control of hand action

tarsiers

ataxia

apraxia

Iwaniuk (2000), On the origins of skilled forelimb movements

homologizing

analogy or homoplasy

ungulates

therian

monotremes

Goodale (1998), Frames of Reference for Perception and Action in the Human Visual System

Andersen (1997), Multimodal representation of space in the posterior parietal cortex and its use in planning movements

allocentric

head-gain fields etc.

what does this mean: 'Efference copies of motor commands, probably generated in the frontal lobes, also converge on the posterior parietal cortex and provide information about body movements (all coded in different coordinate frames)' and what does it achieve???

what functions/processing relate to the planning of movements???

where's the striate??? is it just V1???

isn't 7a also tied in with the somatosensory system??? what do the parahippocampal gyrus and cingulate cortex do, and where are they???

what does subcortical mean???

what is an eye 'gain field', and how is andersen proposing that the transformation into a head-centred frame is effected???

is there an efference copy to the parietal cortex, i.e. do the cells in 7a and LIP respond to motor signals to the neck as well as proprioceptive and vestibular???

how's this related to 'place fields' in the rat, and Rolls + Stringer's work???

why are auditory fields encoded in eye-centred coordinates in the superior colliculus??? what does the superior colliculus do, and what pathways project to/from it???

what is the gain field mechanism???

it's when one component of a (local) frame of reference is stripped away or subtracted, e.g. eye movements, to give a higher-order frame of reference (e.g. head-centred), by shifting the receptive fields of the cells, so that the inputs the cells are receiving are as they would have been had that dimension been kept static

why is it implied that Xing's model formed 'an implicit distributed representation of head-centred location in the hidden layer' when outputting its two motor vector saccades??? (pg 17)

I suppose it's head-centred, because it was outputting motor vectors for eye movements, which have to be located within a higher-order (i.e. head-centred) coordinate frame themselves

how do they know in experiments that there is no visual input to the PPC, frontal eye fields and superior colliculus when they are forced to code double saccades etc.???

in their experiment to show that different types of movement (and so different intentions) lead to different activity in the PPC, when they say that 'one half of the sensory responses to the flashed targets also distinguish between the type of movement the light calls for' (pg 23), did Andersen et al ensure that there's no colour representation in the PPC???

what are the Brodmann areas for the LIP, VIP, MST etc.???

where else (e.g. premotor) supposedly codes for basic reaching/motor/ocular motor intentions???

Husain & Jackson (2001), Visual space is not what it appears to be

why does the post-saccadic vision make a difference??? how does this demonstrate Goodale & Milner's point???

I think it's intended to demonstrate the 'dynamic nature of the representations of visual space immediately prior to the execution of saccadic eye movements'.

no, it's to do with conscious perception vs control of action

where's the superior parietal lobe - what does it correspond to??? what about the parietal reach region???

so what is a body-centred representation for them - is it higher-order, or do they mean trunk-centred??? argh, what does it mean normally???

if they're right, would this dispense with the need for body-centred representations??? could they, say, take Andersen's gain field theory on board or something similar, so that the remappings could be dynamic and almost instantaneous, or would the remappings be slow in processing time (as they would on a serial computer, for instance)???

what would be the point of a remapping from a coordinate system with the initial fixation point as its origin, to one with the upcoming fixation point as its origin???

if it is the re-mapping that's behind the spatial effect, why doesn't it work properly??? why does the post-saccadic information make a difference??? surely post-saccadic information would allow them to make a more accurate, online re-mapping (as Goodale & Milner originally suggested???)???

if 'some LIP neurons [do] continue to encode both the original point of fixation as well as the new intended point of fixation at around the time of the saccade, effectively encoding space in a coarser representation', isn't that a bit pointless??? and wouldn't that be a horribly coarse representation???

how would Andersen reply to all this???

what is the hemispatial neglect syndrome that follows parietal damage???

what are electrophysiological studies???

I think it's basically anything that they usually do on monkeys rather than humans

what's the point of an eye-centred representation for reaching movements???

what's the difference between magnetic misreaching and compression???

is there a difference between body-centred and proprioceptive coordinates???

why would remembered locations be any more likely to be coded in body-centred coordinates???